
    SEGREGATION OF SPEECH SIGNALS IN NOISY ENVIRONMENTS

    Automatic segregation of overlapping speech signals from single-channel recordings is a challenging problem in speech processing. Similarly, extracting speech from noisy recordings has attracted a variety of research for several years but remains unsolved. Speech extraction from mixtures in which the background interference may be either speech or noise is especially difficult when the task is to preserve perceptually salient properties of the recovered acoustic signals for use in human communication. In this work, we propose a speech segregation algorithm that can deal simultaneously with both background noise and interfering speech. The proposed algorithm is feature-based and bottom-up: it makes no assumptions about the nature of the interference and does not rely on any prior trained source models for speech extraction. As such, the algorithm should be applicable to a wide variety of problems, and should also be useful for human communication, since one aim of the system is to recover the target speech signals in the acoustic domain. The algorithm can be compartmentalized into (1) a multi-pitch detection stage, which extracts the pitch of the participating speakers; (2) a segregation stage, which teases apart the harmonics of the participating sources; (3) a reliability and add-back stage, which scales the estimates according to their reliability and adds back appropriate amounts of aperiodic energy for the unvoiced regions of speech; and (4) a speaker assignment stage, which assigns the extracted speech signals to their respective sources. The pitch of two overlapping speakers is extracted using a novel feature, the 2-D Average Magnitude Difference Function, which also yields a single pitch estimate when the input contains only one speaker.
The segregation algorithm is based on a least-squares framework that relies on the estimated pitch values to estimate each speaker's contribution to the mixture. The reliability block applies a non-linear function of the energy of the estimates; this function was learnt from a variety of speech and noise data but is generic in nature and applicable across databases. With both single- and multiple-pitch extraction and segregation capabilities, the proposed algorithm is amenable to both speech-in-speech and speech-in-noise conditions. The algorithm is evaluated on several objective and subjective tests using both speech and noise interference from different databases. The proposed speech segregation system performs comparably to or better than the state of the art on most of the objective tasks. Subjective tests of the reconstructed speech signals, with normal-hearing listeners as well as hearing-aid users, indicate a significant improvement in the perceptual quality of the speech after processing, and suggest that the proposed segregation algorithm can serve as a pre-processing block within the signal processing of communication devices. The utility of the algorithm for both perceptual and automatic tasks, based on a single-channel solution, makes it a unique speech extraction tool and a first of its kind in contemporary technology.
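As a simplified, single-speaker illustration of the pitch-detection and least-squares stages described above, the sketch below estimates pitch with a 1-D Average Magnitude Difference Function and fits a harmonic model by least squares. The function names, frame length, and harmonic count are illustrative assumptions; the thesis's 2-D AMDF and multi-speaker machinery are not reproduced here.

```python
import numpy as np

def amdf(frame, lag_min, lag_max):
    """Average Magnitude Difference Function: dips occur at lags that
    are multiples of the pitch period."""
    n = len(frame)
    lags = np.arange(lag_min, lag_max)
    d = np.array([np.mean(np.abs(frame[:n - lag] - frame[lag:])) for lag in lags])
    return lags, d

def estimate_pitch(frame, fs, f_min=60.0, f_max=400.0):
    """Single-speaker pitch estimate: the lag minimizing the AMDF."""
    lags, d = amdf(frame, int(fs / f_max), int(fs / f_min))
    return fs / lags[np.argmin(d)]

def harmonic_least_squares(frame, fs, f0, n_harm=10):
    """Least-squares fit of cosines/sines at multiples of f0; returns
    the reconstructed periodic component of the frame."""
    t = np.arange(len(frame)) / fs
    cols = []
    for k in range(1, n_harm + 1):
        cols.append(np.cos(2 * np.pi * k * f0 * t))
        cols.append(np.sin(2 * np.pi * k * f0 * t))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, frame, rcond=None)
    return A @ coeffs

# Synthetic voiced frame: 100 Hz fundamental plus its second harmonic.
fs = 8000
t = np.arange(400) / fs
x = np.cos(2 * np.pi * 100 * t) + 0.5 * np.cos(2 * np.pi * 200 * t)
f0 = estimate_pitch(x, fs)
periodic = harmonic_least_squares(x, fs, f0)
```

In a two-speaker mixture, the same least-squares design matrix would stack harmonic columns for both estimated pitches, so that solving one system partitions the mixture energy between the sources.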

    Detection of Irregular Phonation in Speech

    This work addresses the detection and characterization of irregular phonation in spontaneous speech. Whereas published work treats this as a two-hypothesis problem only in regions of speech with phonation, this work focuses on distinguishing aperiodicity due to frication from aperiodicity due to irregular voicing. This work also deals with correction of a current pitch tracking algorithm in regions of irregular phonation, where most pitch trackers perform poorly. Building on the detection of regions of irregular phonation, an acoustic parameter is developed to characterize these regions for speaker identification applications. On a clean speech corpus (TIMIT), the detection rate of the algorithm is 91.8%, with 17.42% false detections; on a telephone speech corpus (NIST 98), the detection rate is 89.2%, with 12.8% false detections. Pitch detection accuracy increased from 95.4% to 98.3% on TIMIT, and from 94.8% to 97.4% on NIST 98. When the creakiness parameter was added to a set of seven acoustic parameters for speaker identification on the NIST 98 database, performance improved by 1.5% for female speakers and 0.4% for male speakers for a population of 250 speakers.
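The frication-versus-irregular-voicing distinction can be illustrated with a toy frame classifier. This is not the thesis's method: the periodicity measure, the spectral-centroid split, and both thresholds are assumptions chosen only to show the idea that aperiodic frication concentrates energy high in the spectrum while irregular voicing keeps a low-frequency glottal character.

```python
import numpy as np

def periodicity(frame, lag_min, lag_max):
    """Peak normalized autocorrelation over candidate pitch lags:
    near 1 for regular voicing, low for aperiodic frames."""
    f = frame - np.mean(frame)
    denom = np.dot(f, f) + 1e-12
    return max(np.dot(f[:-lag], f[lag:]) / denom for lag in range(lag_min, lag_max))

def power_centroid(frame, fs):
    """Power-weighted mean frequency: frication noise sits high in the
    spectrum, glottal-pulse energy sits low."""
    p = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    return np.sum(freqs * p) / (np.sum(p) + 1e-12)

def label_frame(frame, fs, centroid_split=1000.0, periodic_thresh=0.5):
    """Toy three-way labelling: periodic -> voiced; otherwise split
    aperiodic frames by where their spectral energy lies."""
    if periodicity(frame, int(fs / 400), int(fs / 60)) > periodic_thresh:
        return "voiced"
    return "frication" if power_centroid(frame, fs) > centroid_split else "irregular"

# Synthetic examples: a 100 Hz tone, broadband noise (frication-like),
# and low-pass-filtered noise (low-frequency aperiodic energy).
fs = 8000
rng = np.random.default_rng(0)
labels = [
    label_frame(np.cos(2 * np.pi * 100 * np.arange(400) / fs), fs),
    label_frame(rng.standard_normal(1600), fs),
    label_frame(np.convolve(rng.standard_normal(1600), np.ones(16) / 16, mode="same"), fs),
]
```

A real detector would, as the abstract indicates, operate on spontaneous speech and feed its decisions back into the pitch tracker; the sketch only separates the three acoustic regimes on clean synthetic frames.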

    Distal renal tubular acidosis with nerve deafness secondary to ATP6B1 gene mutation

    Autosomal recessive distal renal tubular acidosis (dRTA) is associated with mutations in the ATP6B1 gene, which encodes the B1 subunit of H⁺-ATPase, one of the key membrane transporters for net acid excretion by the α-intercalated cells of the medullary collecting ducts. Sensorineural deafness frequently accompanies this type of dRTA. We describe a patient who had distinct features of dRTA with bilateral sensorineural hearing loss and an ATP6B1 mutation. This is a rare entity.